Investigations on discriminative training criteria

Author

  • Ralf Schlüter
Abstract

In this work, a framework for efficient discriminative training and modeling is developed and implemented for both small and large vocabulary continuous speech recognition. Special attention is directed to the comparison and formalization of various discriminative training criteria and the corresponding optimization methods, to discriminative acoustic model evaluation, and to feature extraction. A formally unifying approach for a class of discriminative training criteria, including the Maximum Mutual Information (MMI) and Minimum Classification Error (MCE) criteria, is presented, together with the corresponding optimization methods, gradient descent (GD) and the extended Baum-Welch (EB) algorithm. Using discriminative criteria, novel approaches to the splitting of Gaussian mixture densities and to linear feature transformation are derived. Furthermore, efficient algorithms for the application of discriminative training to speech recognition with both small and large vocabularies are developed. Finally, a novel evaluation method for the stochastic models used in speech recognition is derived using methods related to discriminative training. Experiments were carried out on the TI digit string corpus (American English continuous digit strings), the SieTill corpus (telephone-quality German continuous digit strings), the Verbmobil corpus (German spontaneous speech), and the Wall Street Journal corpus (American English read speech).
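
For orientation, a minimal sketch of the two criteria named above in their common textbook form, written for training pairs (X_r, W_r) of acoustic observations and reference transcriptions, an acoustic model p_θ(X|W) and a language model p(W); the unified notation, smoothing functions and scaling exponents actually used in the thesis may differ:

\[
F_{\mathrm{MMI}}(\theta) = \sum_{r=1}^{R} \log \frac{p_\theta(X_r \mid W_r)\, p(W_r)}{\sum_{W} p_\theta(X_r \mid W)\, p(W)}
\]

\[
F_{\mathrm{MCE}}(\theta) = \sum_{r=1}^{R} \frac{1}{1 + \exp\!\bigl(-\gamma\, d_r(\theta)\bigr)},
\qquad
d_r(\theta) = -\log \frac{p_\theta(X_r \mid W_r)\, p(W_r)}{\Bigl[\sum_{W \neq W_r} \bigl(p_\theta(X_r \mid W)\, p(W)\bigr)^{\eta}\Bigr]^{1/\eta}}
\]

Here F_MMI is maximized and F_MCE, a smoothed sentence error count, is minimized; both compare the correct transcription W_r against competing hypotheses W. Gradient descent follows the gradient of F directly, while the extended Baum-Welch algorithm applies growth-transform-style reestimation formulas to the Gaussian mixture parameters.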

Similar articles

Investigations on error minimizing training criteria for discriminative training in automatic speech recognition

Discriminative training criteria have been shown to consistently outperform maximum likelihood training of speech recognition systems. In this paper we employ the Minimum Classification Error (MCE) criterion to optimize the parameters of the acoustic model of a large scale speech recognition system. The statistics for both the correct and the competing model are solely collected on word lattices wi...

A log-linear discriminative modeling framework for speech recognition

Conventional speech recognition systems are based on Gaussian hidden Markov models (HMMs). Discriminative techniques such as log-linear modeling have been investigated in speech recognition only recently. This thesis establishes a log-linear modeling framework in the context of discriminative training criteria, with examples from continuous speech recognition, part-of-speech tagging, and handwr...
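
As a sketch of the modeling idea referred to in this summary (a generic log-linear posterior over feature functions f_i with weights λ_i; the specific feature functions and training setup of the thesis are not reproduced here):

\[
p_\Lambda(c \mid x) = \frac{\exp\Bigl(\sum_i \lambda_i f_i(x, c)\Bigr)}{\sum_{c'} \exp\Bigl(\sum_i \lambda_i f_i(x, c')\Bigr)}
\]

For fixed feature functions and fully observed classes, maximizing the log posterior of the correct classes is a convex problem in Λ, which is one of the attractions of log-linear models compared with generatively parameterized Gaussian HMMs.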

Investigations on discriminative training in large scale acoustic model estimation

In this paper two common discriminative training criteria, maximum mutual information (MMI) and minimum phone error (MPE), are investigated. Two main issues are addressed: sensitivity to different lattice segmentations and the contribution of the parameter estimation method. It is noted that MMI and MPE may benefit from different lattice segmentation strategies. The use of discriminative criteri...
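
For reference, the MPE criterion in the form commonly used in the literature (a sketch; A(W, W_r) is the raw phone accuracy of hypothesis W against the reference W_r, κ a probability scale, and the lattice-based approximations discussed in the paper are omitted):

\[
F_{\mathrm{MPE}}(\theta) = \sum_{r=1}^{R} \frac{\sum_{W} \bigl(p_\theta(X_r \mid W)\, p(W)\bigr)^{\kappa}\, A(W, W_r)}{\sum_{W} \bigl(p_\theta(X_r \mid W)\, p(W)\bigr)^{\kappa}}
\]

This is the expected phone accuracy under the scaled sentence posterior, to be maximized, whereas MMI maximizes the log posterior of the entire reference transcription.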

Posterior-Scaled MPE: Novel Discriminative Training Criteria

We recently discovered novel discriminative training criteria following a principled approach. In this approach training criteria are developed from error bounds on the global error for pattern classification tasks that depend on non-trivial loss functions. Automatic speech recognition (ASR) is a prominent example for such a task depending on the non-trivial Levenshtein loss. In this context, t...

Two feature transformation methods based on genetic algorithms for reducing the classification error of support vector machines

Discriminative methods are used to increase pattern recognition and classification accuracy. These methods can be used as discriminant transformations applied to the features, or as discriminative learning algorithms for the classifiers. Usually, the criteria used for discriminative transformations differ from the training criteria or error measures of the discriminant classifiers. In this ...
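
As an illustration of the general idea sketched in this summary, and not of the two specific methods proposed in the paper, the following is a minimal genetic-algorithm search for a linear feature transformation that reduces the validation error of an SVM; the toy dataset, the GA settings and the scikit-learn usage are assumptions made for the example:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy data standing in for a real feature set (assumption for the example).
X, y = make_classification(n_samples=400, n_features=10, n_informative=5, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

D_IN, D_OUT = X.shape[1], 5          # project 10-dim features down to 5 dimensions
POP, GENERATIONS, MUT_STD = 30, 40, 0.1

def fitness(A):
    """Validation accuracy of an SVM trained on linearly transformed features."""
    clf = SVC(kernel="rbf", C=1.0).fit(X_train @ A.T, y_train)
    return clf.score(X_val @ A.T, y_val)

# Initial population: random linear transformations (D_OUT x D_IN matrices).
population = [rng.normal(size=(D_OUT, D_IN)) for _ in range(POP)]

for gen in range(GENERATIONS):
    scores = np.array([fitness(A) for A in population])
    order = np.argsort(scores)[::-1]
    parents = [population[i] for i in order[:POP // 2]]    # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = rng.choice(len(parents), size=2, replace=False)
        mask = rng.random(parents[0].shape) < 0.5           # uniform crossover
        child = np.where(mask, parents[a], parents[b])
        child = child + rng.normal(scale=MUT_STD, size=child.shape)  # mutation
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print("best validation accuracy:", fitness(best))

Here the fitness is simply the validation accuracy of an SVM retrained on the transformed features, so the transformation is optimized directly against the classifier's own error rather than against a separate discriminant criterion.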

Year of publication: 2000